13 research outputs found

    Lossless Intra Coding in HEVC with 3-tap Filters

    This paper presents a pixel-by-pixel spatial prediction method for lossless intra coding within High Efficiency Video Coding (HEVC). A well-known previous pixel-by-pixel spatial prediction method uses only two neighboring pixels for prediction, based on the angular projection idea borrowed from block-based intra prediction in lossy coding. This paper explores a method which uses three neighboring pixels for prediction according to a two-dimensional correlation model, where the neighbor pixels used and the prediction weights change depending on the intra mode. To find the best prediction weights for each intra mode, a two-stage offline optimization algorithm is used, and a number of implementation aspects are discussed to simplify the proposed prediction method. The proposed method is implemented in the HEVC reference software, and experimental results show that the explored 3-tap filtering method achieves an average 11.34% bitrate reduction over the default lossless intra coding in HEVC. The proposed method also decreases average decoding time by 12.7% while increasing average encoding time by 9.7%. Comment: 10 pages, 7 figures
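    As a rough illustration of the pixel-by-pixel scheme described above, the sketch below predicts each pixel from its left, top, and top-left neighbours with fixed weights. The weights, the rounding, and the 128 boundary value are placeholder assumptions for illustration, not the mode-dependent optimized weights of the paper; in lossless coding the reconstructed neighbours equal the originals, so prediction can be run directly on the source image:

    ```python
    import numpy as np

    def predict_3tap(img, w_left=0.5, w_top=0.5, w_topleft=0.0):
        """3-tap pixel-by-pixel prediction residual (illustrative weights)."""
        img = img.astype(np.int64)
        h, w = img.shape
        resid = np.zeros_like(img)
        for y in range(h):
            for x in range(w):
                left = img[y, x - 1] if x > 0 else 128   # boundary default
                top = img[y - 1, x] if y > 0 else 128
                tl = img[y - 1, x - 1] if (x > 0 and y > 0) else 128
                pred = int(round(w_left * left + w_top * top + w_topleft * tl))
                resid[y, x] = img[y, x] - pred
        return resid

    def reconstruct_3tap(resid, w_left=0.5, w_top=0.5, w_topleft=0.0):
        """Inverse: rebuild pixels sequentially from residuals (lossless)."""
        h, w = resid.shape
        img = np.zeros_like(resid)
        for y in range(h):
            for x in range(w):
                left = img[y, x - 1] if x > 0 else 128
                top = img[y - 1, x] if y > 0 else 128
                tl = img[y - 1, x - 1] if (x > 0 and y > 0) else 128
                pred = int(round(w_left * left + w_top * top + w_topleft * tl))
                img[y, x] = resid[y, x] + pred
        return img
    ```

    Because the decoder repeats the exact same integer prediction, the residual round-trips losslessly.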

    Learned Lossless Image Compression Through Interpolation With Low Complexity

    With the increasing popularity of deep learning in image processing, many learned lossless image compression methods have been proposed recently. One group of algorithms that has shown good performance is based on learned pixel-based auto-regressive models; however, their sequential nature prevents easily parallelized computation and leads to long decoding times. Another popular group of algorithms is based on scale-based auto-regressive models and can provide competitive compression performance while also enabling simple parallelization and much shorter decoding times. However, their major drawback is the large neural networks and high computational complexity they require. This paper presents an interpolation-based learned lossless image compression method which falls in the scale-based auto-regressive model group. The method achieves compression performance better than or on par with the recent scale-based auto-regressive models, yet requires more than 10x fewer neural network parameters and less encoding/decoding computation. These achievements are due to contributions and findings in the overall system and neural network architecture design, such as sharing interpolator neural networks across different scales, using separate neural networks for different parameters of the probability distribution model, and performing the processing in the YCoCg-R color space instead of the RGB color space. Comment: 8 pages, 4 figures, 2 tables
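    The YCoCg-R color space mentioned above is itself a small lifting-based, integer-reversible transform, which is what makes it usable in a lossless pipeline. A minimal sketch of the standard YCoCg-R lifting steps (independent of the paper's networks) is:

    ```python
    import numpy as np

    def rgb_to_ycocg_r(rgb):
        """Forward YCoCg-R: lifting steps with floor-division halving,
        so the transform is exactly invertible on integers."""
        r, g, b = (rgb[..., i].astype(np.int64) for i in range(3))
        co = r - b
        t = b + (co >> 1)
        cg = g - t
        y = t + (cg >> 1)
        return np.stack([y, co, cg], axis=-1)

    def ycocg_r_to_rgb(ycc):
        """Inverse YCoCg-R: undo the lifting steps in reverse order."""
        y, co, cg = (ycc[..., i] for i in range(3))
        t = y - (cg >> 1)
        g = cg + t
        b = t - (co >> 1)
        r = b + co
        return np.stack([r, g, b], axis=-1)
    ```

    The `>> 1` floor shifts discard the same bit in both directions, so no information is lost even though each step rounds.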

    Lossless Image and Intra-Frame Compression With Integer-to-Integer DST


    Lossless intra coding in HEVC with integer-to-integer DST

    It is desirable to support efficient lossless coding within video coding standards, which are primarily designed for lossy coding, with as little modification as possible. A simple approach is to skip transform and quantization and directly entropy code the prediction residual, but this is inefficient for compression. A more efficient and popular approach is to process the residual block with DPCM prior to entropy coding. This paper explores an alternative approach based on processing the residual block with integer-to-integer (i2i) transforms. I2i transforms map integers to integers; however, unlike the integer transforms used in HEVC for lossy coding, they do not increase the dynamic range at the output and can therefore be used in lossless coding. We use both an i2i DCT from the literature and a novel i2i approximation of the DST. Experiments with the HEVC reference software show competitive results. Comment: arXiv admin note: substantial text overlap with arXiv:1605.0511
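    The lifting idea behind i2i transforms can be illustrated with the simplest example, the 2-point S-transform (an integer Haar transform). The paper's i2i DCT/DST use longer lifting factorizations, but the principle is the same: each lifting step rounds, yet is exactly undone by the inverse step:

    ```python
    def s_transform_fwd(a, b):
        """2-point S-transform: integer low-pass/high-pass pair.
        Illustrative only; not the paper's i2i DCT or DST."""
        d = a - b          # high-pass: difference
        s = b + (d >> 1)   # low-pass: ~ (a + b) / 2 with floor rounding
        return s, d

    def s_transform_inv(s, d):
        """Exact inverse: the same floor shift is subtracted back."""
        b = s - (d >> 1)
        a = b + d
        return a, b
    ```

    Note that the low-pass output `s` stays in roughly the input range, unlike a scaled integer DCT, which is why i2i transforms suit lossless coding.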

    Are COVID-19-Related Economic Supports One of the Drivers of Surge in Bitcoin Market? Evidence from Linear and Non-Linear Causality Tests

    The aim of this study was to investigate the causal relations between COVID-19 economic supports and Bitcoin markets. For this purpose, we first determined the degree of integration of the variables by implementing Fourier Augmented Dickey–Fuller unit root tests. Then, we carried out both linear (Bootstrap Toda–Yamamoto) and non-linear (Fractional Frequency Flexible Fourier form Toda–Yamamoto) causality tests to account for nonlinearities in the variables, to determine whether the effects of multiple structural breaks were temporary or permanent, and to evaluate the unidirectional causality running from COVID-19-related economic supports to the price, volatility, and trading volume of Bitcoin. Our study included 158 countries, and we used daily data over the period from 1 January 2020 to 10 March 2022. The findings of this study provide evidence of unidirectional causalities running from COVID-19-related economic supports to the price, volatility, and trading volume of Bitcoin in most of the countries in the sample. The application of non-linear causality tests helped us obtain more evidence about these causalities. Some of these causalities were found to be permanent, and some of them were found to be temporary. The results of the study indicate that COVID-19-related economic supports can be considered a major driver of the surge in the Bitcoin market during the pandemic.
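    The core of a Toda–Yamamoto test is a lag-augmented Wald test: fit a regression with p + d_max lags of each series and test only the first p lags of the candidate cause. The sketch below is a minimal single-equation numpy version of that idea; it is not the bootstrap or Fourier-form variants used in the study, and the lag choices are illustrative:

    ```python
    import numpy as np

    def toda_yamamoto_wald(y, x, p=2, d_max=1):
        """Wald statistic for 'x does not Granger-cause y', using
        p + d_max lags (Toda-Yamamoto lag augmentation). Compare the
        returned statistic to a chi-square(p) critical value."""
        k = p + d_max
        n = len(y)
        rows = []
        for t in range(k, n):
            row = [1.0]                                # constant
            row += [y[t - j] for j in range(1, k + 1)] # own lags
            row += [x[t - j] for j in range(1, k + 1)] # lags of x
            rows.append(row)
        X = np.asarray(rows)
        Y = np.asarray(y[k:], dtype=float)
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
        sigma2 = resid @ resid / (X.shape[0] - X.shape[1])
        cov = sigma2 * np.linalg.inv(X.T @ X)
        # test only the first p lags of x (after constant and k y-lags)
        idx = np.arange(1 + k, 1 + k + p)
        b = beta[idx]
        V = cov[np.ix_(idx, idx)]
        return float(b @ np.linalg.solve(V, b))
    ```

    The extra d_max lags are included in the regression but excluded from the test, which is what keeps the Wald statistic's asymptotic chi-square distribution valid for integrated series.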

    Video compression with 1-D directional transforms in H.264/AVC

    Typically, the same transforms, such as the 2-D Discrete Cosine Transform (DCT), are used to compress both images in image compression and prediction residuals in video compression. However, these two signals have different spatial characteristics. In prior work, we analyzed the difference between these two signals and proposed 1-D directional transforms for prediction residuals. In this paper, we provide further experimental results using these transforms in the H.264/AVC codec and present other related information which can provide insights into the use of these transforms in video coding applications.
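    The basic idea of a 1-D directional transform can be sketched as follows: instead of a separable 2-D DCT, a 1-D DCT is applied only along one direction of the residual block, leaving the other dimension untransformed. This toy version supports only horizontal and vertical directions, not the full set of angular directions such transforms use:

    ```python
    import numpy as np

    def dct_matrix(n):
        """Orthonormal DCT-II matrix (rows = frequencies)."""
        k = np.arange(n)[:, None]
        i = np.arange(n)[None, :]
        m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
        m[0] *= 1 / np.sqrt(2)
        return m * np.sqrt(2.0 / n)

    def transform_1d(block, direction):
        """Apply a 1-D DCT along columns ('vertical') or along rows
        ('horizontal') of a residual block."""
        if direction == 'vertical':
            D = dct_matrix(block.shape[0])
            return D @ block          # each column transformed
        D = dct_matrix(block.shape[1])
        return block @ D.T            # each row transformed
    ```

    Because the DCT matrix is orthonormal, the inverse is just multiplication by its transpose from the appropriate side.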

    Transforms for the Motion Compensation Residual

    The Discrete Cosine Transform (DCT) is the most widely used transform in image and video compression. Its use in image compression is often justified by the notion that it is the statistically optimal transform for first-order Markov signals, which have been used to model images. In standard video codecs, the motion-compensation residual (MC-residual) is also compressed with the DCT. The MC-residual may, however, possess different characteristics from an image. Hence, the question that arises is whether other transforms can be developed that perform better on the MC-residual than the DCT. Inspired by recent research on direction-adaptive image transforms, we provide an adaptive auto-covariance characterization for the MC-residual that shows some statistical differences between the MC-residual and the image. Based on this characterization, we propose a set of block transforms. Experimental results indicate that these transforms can improve the compression efficiency of the MC-residual.
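    The kind of auto-covariance characterization described above starts from a simple statistic: the sample auto-covariance of a block at a given displacement. A minimal sketch (non-negative displacements only, purely illustrative of the statistic, not the paper's adaptive model):

    ```python
    import numpy as np

    def autocov(block, dy, dx):
        """Sample auto-covariance of a 2-D block at displacement
        (dy, dx), dy >= 0 and dx >= 0; at (0, 0) this is the variance."""
        b = block - block.mean()
        h, w = b.shape
        a = b[:h - dy, :w - dx]   # reference region
        c = b[dy:, dx:]           # shifted region
        return float((a * c).mean())
    ```

    Comparing how fast this decays with displacement, in different directions, is one way images and MC-residuals can be shown to differ statistically.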

    Directional wavelet transforms for prediction residuals in video coding

    Various directional transforms have been developed recently to improve image compression. In video compression, however, prediction residuals of image intensities, such as the motion compensation residual or the resolution enhancement residual, are transformed. The applicability of directional transforms to prediction residuals has not been carefully investigated. In this paper, we briefly discuss the differing characteristics of prediction residuals and images, and propose directional transforms specifically designed for prediction residuals. We compare these transforms with the directional transforms proposed for images, using prediction residuals as input. The results of the comparison indicate that our proposed directional transforms can provide better compression of prediction residuals than the directional transforms proposed for images.

    1-D Transforms for the Motion Compensation Residual
